Patent abstract:
METADATA LINKED IDENTIFICATION SYSTEM, IMAGE SEARCH METHOD, AND DEVICE
A metadata linked identification system, an image search method, a device, and a linked gesture identification method are provided. The metadata linked identification system includes a first device that identifies metadata linked to an image and transmits the image identified linked to the metadata, and a second device that allows at least one image among the stored images to be searched. Consequently, the generated data can be searched and used more easily and conveniently.
Publication number: BR112012002919A2
Application number: R112012002919-3
Filing date: 2010-08-20
Publication date: 2020-08-11
Inventors: Keum-Koo Lee;Ji-young Kwahk;Seung-dong Yu;Hyun-joo Oh;Yeon-hee ROH;Joon-Hwan Kim;Soung-min YOO;Min-jung PARK
Applicant: Samsung Electronics Co., Ltd.;
Main IPC:
Patent description:

METADATA LINKED IDENTIFICATION SYSTEM, IMAGE SEARCH METHOD, AND DEVICE Technical Field The present invention relates generally to a metadata linked identification system, an image search method and device, and a method for identifying a gesture thereof and, more particularly, to a metadata linked identification system for searching an image based on linked identified metadata, to an image search method and device, and to a method for identifying a gesture thereof.
Background Art In general, a user searches for data by entering a keyword related to the data or by selecting a keyword from among the keywords provided by a system. For this, metadata indicating the content of the data must be registered in association with a database that is prepared in advance.
However, a user must frequently update the registration of metadata in association with the database in order to search the data properly, and this can cause inconvenience to the user. That is, if the metadata is not properly associated with the database through registration, the user may not be able to search for the desired data effectively among a large amount of data.
Therefore, a method is required to extract metadata automatically, without requiring a user to register an association, so as to allow the user to search the data effectively based on the extracted metadata.
Disclosure of Invention Technical Problem Aspects of the embodiments of the present invention relate to a metadata linked identification system for searching data easily and conveniently, an image search method and device, and a method for identifying a gesture thereof.
Solution to Problem A metadata linked identification system, according to an embodiment of the present invention, includes a first device that analyzes an image, extracts at least one metadata with respect to the image, identifies the metadata in a way linked to the image, and transmits the image identified linked to the metadata to a third device, and a second device that allows at least one image among the images stored on the third device to be searched based on the metadata.
The first device analyzes an image using at least one of: a face recognition algorithm,
a configuration analysis algorithm, and a color analysis algorithm, and extracts the metadata with respect to at least one of: figure, location and object included in the image.
The second device selects at least one text from the texts displayed on the second device and transmits the selected text to the third device, so that at least one image from the images stored on the third device is searched based on the selected text.
The third device fetches an image based on the identified metadata linked to the images stored in the third device, and transmits the image sought to the second device.
An image search method, according to one embodiment of the present invention, includes extracting at least one metadata with respect to an image by means of image analysis, identifying the metadata in a way linked to the image and storing the image with the identified metadata linked on an external device, and searching for at least one image among the images stored on the external device based on the metadata.
A device, according to an embodiment of the present invention, includes a handling unit that receives a gesture input from a user and a controller that identifies the input gesture in a way linked to the specific content metadata.
A method of linked gesture identification, in accordance with an embodiment of the present invention, includes receiving a user's gesture input and identifying the gesture received from the user in a way linked to the specific content metadata.
The linked gesture identification method also includes executing the specific content if the gesture is introduced through the handling unit, while the gesture is identified in a way linked to the metadata of the specific content.
Advantageous Effects of the Invention As described above, the data generated is analyzed and the metadata is identified and stored.
Therefore, data can be searched and used in an easier and more convenient way. In addition, as a gesture is also identified in a way linked to metadata, a user can search for the desired content using the gesture.
Brief Description of Drawings FIG. 1 illustrates a metadata linked identification system according to an embodiment of the present invention.
FIG. 2 illustrates a method for linked metadata identification according to an embodiment of the present invention.
FIG. 3 illustrates a method for linked metadata identification according to another embodiment of the present invention.
FIG. 4 illustrates an image with identified metadata linked on a screen.
FIG. 5 illustrates a method for searching for an image.
FIGS. 6-10 illustrate search methods on a screen.
FIG. 11 illustrates images searched for by a central storage device that are transmitted to an MP3P and displayed on an MP3P screen.
FIG. 12 is a schematic block diagram of a mobile phone.
FIG. 13 is a schematic block diagram of a central storage device.
FIGS. 14-19 illustrate a process of identifying a user's gesture in a way linked to the metadata of music content using a mobile phone in accordance with one embodiment of the present invention. FIGS. 20-22 illustrate a process of executing content using a gesture according to an embodiment of the present invention, and FIGS. 23-27 illustrate a process of changing an identified gesture linked to the metadata of music content using a mobile phone.
Best Mode for Carrying Out the Invention Reference will now be made, in detail, to the embodiments of the present invention, with reference to the accompanying drawings. The following detailed description includes specific details to provide a complete understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention can be practiced without such specific details.
Configuration and operation flow of the metadata linked identification system Figure 1 illustrates a metadata linked identification system according to an embodiment of the present invention. The metadata linked identification system extracts metadata from a photographed image and stores the photographed image after identifying the extracted metadata linked to the photographed image, in order to make it easier for a user to search for the photographed images.
The metadata linked identification system comprises a mobile phone 100, a central storage device 200, and peripheral devices 310, 320, 330, 340.
The mobile phone 100 can be used not only to have a telephone communication with another communication subscriber, but also to photograph an object and store the photographed image in the central storage device 200, which will be explained below.
The mobile phone 100 photographs an object using a built-in camera, extracts information about the properties of the image, that is, the metadata of the image by analyzing the image of the photographed object and transmits the image to the central storage device 200, after identification of extracted metadata linked to the image so that the identified image can be stored in the central storage device 200. The method for identifying metadata linked to an image will be explained below.
In general, the central storage device 200 is a fixed storage device that stores the images received from the mobile phone 100 and transmits images that are recalled in response to a command from one of the peripheral devices 310, 320, 330, 340 to one of the peripheral devices 310, 320, 330, 340.
Of course, the central storage device 200 can store the images generated not only by the mobile phone 100, but also by the peripheral device 310, and can transmit the images not only to the peripheral devices 310, 320, 330, 340, but also to the mobile phone 100.
Peripheral devices 310, 320, 330, 340 include all devices that are portable, and examples of peripheral devices 310, 320, 330, 340 include an MP3P 310, a digital camera 320, a TV 330, and a video monitor 340.
Peripheral devices 310, 320, 330, 340 can also access the central storage device 200 to receive images stored in the central storage device 200. In particular, peripheral devices 310, 320, 330, 340 can receive part of the images stored in the central storage device 200 and can select various search methods to select the part of the stored images. The search methods will be explained below.
Figure 2 illustrates a method for linked metadata identification according to an embodiment of the present invention. Next, the MP3P 310 will be referred to as a device that represents peripheral devices for convenience of explanation.
If mobile phone 100 photographs an object and generates an image in step S410, mobile phone 100 analyzes the image generated in step S420 and extracts image-related metadata in step S430.
Metadata can be extracted using the methods below.
First, metadata can be extracted by analyzing a generated image. The images can be analyzed using a face recognition algorithm, a configuration analysis algorithm, etc.
Using such algorithms, the mobile phone 100 can extract information about a man, an animal, a place, the objects around it or the movement of the man or animal in a generated image.
For example, mobile phone 100 can identify a face from a generated image and obtain information regarding a figure by analyzing the specific outline of the face and the locations of the ears, eyes, nose and mouth on the face. In addition, the mobile phone 100 can also obtain information about a place based on a landmark, a milestone or a signboard included in the generated image.
Second, metadata can be extracted based on information related to the date/time when an image is generated. Date/time information that is automatically stored in the mobile phone 100 when an object is photographed can be used.
For example, if the object is photographed on May 1, 2009, this date is stored in the mobile phone 100, and based on the stored date, the mobile phone 100 can extract the date as metadata.
Third, metadata can be extracted based on information about a location where an image is generated using a GPS signal receiver. The GPS signal receiver identifies a specific location of an object when the object is photographed and transmits the information to the mobile phone 100, and the mobile phone
100 can extract metadata based on the information transmitted.
Fourth, metadata can be extracted based on information entered directly by a user. That is, if the user photographs an object and enters information in relation to the photographed image, the mobile phone 100 can extract metadata based on the information entered.
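The four extraction sources described above (steps S420-S430) might be combined roughly as in the following sketch. This is a minimal illustration in Python rather than the patent's implementation; the analysis helpers (recognize_faces, analyze_scene, dominant_color) and the CapturedImage fields are hypothetical stand-ins for the face recognition, configuration analysis, and color analysis algorithms and for the camera pipeline.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

def recognize_faces(pixels: bytes) -> list[str]:
    # Stand-in for the face recognition algorithm (a real system would compare
    # facial features against faces in pre-stored tagged images).
    return []

def analyze_scene(pixels: bytes) -> list[str]:
    # Stand-in for the configuration (shape) analysis algorithm.
    return []

def dominant_color(pixels: bytes) -> Optional[str]:
    # Stand-in for the color analysis algorithm.
    return None

@dataclass
class CapturedImage:
    pixels: bytes                       # image data from the built-in camera
    captured_at: datetime               # stored automatically when the object is photographed
    gps_location: Optional[str] = None  # from the GPS signal receiver, if available
    user_note: Optional[str] = None     # text entered directly by the user
    metadata: list[str] = field(default_factory=list)

def extract_metadata(image: CapturedImage) -> list[str]:
    """Collect metadata from the four sources described above."""
    tags: list[str] = []
    tags += recognize_faces(image.pixels)                 # e.g. ["suji"]
    tags += analyze_scene(image.pixels)                   # e.g. ["Office", "Desk"]
    color = dominant_color(image.pixels)                  # e.g. "Red"
    if color:
        tags.append(color)
    tags.append(image.captured_at.strftime("%Y-%m-%d"))   # date/time of capture
    if image.gps_location:
        tags.append(image.gps_location)                   # e.g. "Seoul"
    if image.user_note:
        tags.append(image.user_note)                      # user-entered information
    return tags

photo = CapturedImage(pixels=b"", captured_at=datetime(2009, 5, 1, 10, 30),
                      gps_location="Seoul")
photo.metadata = extract_metadata(photo)   # tag the image with the extracted metadata
print(photo.metadata)                       # -> ['2009-05-01', 'Seoul']
```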
Meanwhile, mobile phone 100 identifies the extracted metadata linked to the image in step S440. If there is a plurality of metadata, the mobile phone 100 identifies the plurality of metadata in a way linked to the image.
If several metadata are extracted, mobile phone 100 can select a portion of the metadata from the plurality of metadata and identify the selected metadata in a manner linked to the image. Here, mobile phone 100 can select metadata according to a user's usage pattern or a user-defined priority.
Here, the user's usage pattern refers to the user's pattern for using metadata to search for an image. Specifically, mobile phone 100 can select the most frequently used metadata and identify the metadata linked to an image.
For example, if the most frequently used metadata is metadata with respect to face recognition, the mobile phone 100 can select only the metadata with respect to face recognition among the extracted metadata and identify the selected metadata in a way linked to the image.
The priority set by a user refers to a priority that the user has set for each metadata. Mobile phone 100 selects the metadata with the highest priority and identifies the selected metadata in a manner linked to the image. For example, if the user establishes metadata extracted through face recognition as a first priority, the mobile phone 100 selects the metadata extracted through face recognition from among the various metadata and identifies the selected metadata linked to the image.
As such, if multiple metadata are extracted, mobile phone 100 can select a portion of the various metadata and identify the selected metadata in a linked way to the image.
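One way the selection by usage pattern or user-defined priority could look is sketched below; the representation of the usage history and of the priority table are assumptions made for illustration, not the patent's data structures.

```python
from collections import Counter
from typing import Optional

def select_metadata(extracted: list[str],
                    usage_history: list[str],
                    user_priority: Optional[dict[str, int]] = None,
                    limit: int = 3) -> list[str]:
    """Pick the portion of the extracted metadata that will be tagged to the image.

    usage_history lists metadata the user has searched with before, so the most
    frequently used entries win; user_priority (smaller number = higher priority)
    takes precedence when the user has defined one.
    """
    if user_priority:
        ranked = sorted(extracted,
                        key=lambda m: user_priority.get(m, len(user_priority) + 1))
    else:
        freq = Counter(usage_history)
        ranked = sorted(extracted, key=lambda m: freq[m], reverse=True)
    return ranked[:limit]

# Face-recognition metadata ("suji") has been searched with most often, so it is kept first.
print(select_metadata(["Office", "suji", "Red", "Desk"],
                      usage_history=["suji", "suji", "Office"]))
# -> ['suji', 'Office', 'Red']
```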
If the metadata is linked to the image, the mobile phone 100 transmits the identified image to the central storage device 200 in step S450.
The central storage device 200 stores the image identified with metadata in step S460.
Meanwhile, MP3P 310 receives an image search method from a user to selectively receive a desired image among the images identified and stored in the central storage device 200 in step S470.
If a user selects an image search method, the MP3P 310 asks the central storage device 200 to search for an image according to the method selected in step S480, and the central storage device 200 searches for and extracts an image according to the requested search method in step S490 and transmits the extracted images to MP3P 310 in step S500.
As such, mobile phone 100 analyzes a photographed image and identifies the metadata, and thus the MP3P 310 can use the photographed image by searching for the photographed image from the mobile phone 100 more easily and conveniently.
Figure 3 illustrates a method for linked metadata identification according to another embodiment of the present invention.
If mobile phone 100 photographs an object and generates an image in step S510, mobile phone 100 transmits the image generated to the central storage device 200 in step S520.
The central storage device 200 analyzes the image received in step S530, extracts the metadata from the analyzed image in step S540, and identifies the extracted metadata in a way linked to the received image in step S550. In addition, the central storage device 200 stores the image for which the metadata is identified in step S560.
Meanwhile, the MP3P 310 receives an image search method from a user to selectively receive a desired image from the images identified and stored in the central storage device 200 in step S570.
If a user selects an image search method, MP3P 310 asks the central storage device 200 to search for an image according to the method selected in step S580, and the central storage device 200 searches for and extracts an image according to the requested search method in step S590 and transmits the extracted images to MP3P 310 in step S600.
As such, the image photographed via the mobile phone 100 is transmitted to the central storage device 200, and the central storage device 200 analyzes the received image and identifies the metadata.
Therefore, the MP3P 310 may be able to use the photographed image by searching for the photographed image from the mobile phone 100 more easily and conveniently.
Linked identification screen and search screen Figure 4 illustrates an image with identified metadata linked on a screen. In Figure 4, it is assumed that metadata are identified in a linked way on the mobile phone 100 as in the embodiment of Figure 2 for convenience of explanation.
The metadata identified in a linked form 620 is displayed on the screen along with the photographed image.
Firstly, with reference to the photographed image 610, a person and a desk appear as objects. The mobile phone 100 analyzes the image 610 and extracts "Office", "suji", "Red" and "Desk" as metadata.
Specifically, the mobile phone recognizes that "Desk" is included in image 610 using a configuration analysis algorithm, particularly by comparing the result of the pre-storage image configuration analysis in which the "Desk" metadata is identified with the result analysis and configuration of the photographed image 610.
In addition, mobile phone 100 recognizes that the location of the photographed image 610 is "Office" using the configuration analysis algorithm. In particular, if it is determined that some of the pre-stored images in which the "Desk" metadata is identified were photographed in the same place as the photographed image 610, after comparing the configuration analysis result of those images with the configuration analysis result of the photographed image 610, then "Office", which is other metadata of the images in which the "Desk" metadata is identified, can be extracted as metadata of the photographed image 610.
Similarly, mobile phone 100 can recognize that the person in the photographed image 610 is "suji" using the face recognition algorithm. In particular, if it is determined that the person in some of the pre-stored images in which the "suji" metadata is identified is the same person as in the photographed image 610, after comparing the face recognition result of those images with the face recognition result of the photographed image 610, then "suji", which is metadata of the images portraying the same "suji" as the photographed image 610, can be extracted as metadata of the photographed image 610.
Furthermore, if it is determined that "Red" is a dominant color among the colors of the photographed image 610 with '25 .based on color analysis in the photographed image 610, the mobile phone 100 can extract "Red" as metadata.
Second, if a user photographs the image 610 through the mobile phone 100, the mobile phone 100 stores the date when the image 610 is photographed. Based on this information, the mobile phone 100 extracts the date/time when the image 610 is generated as metadata.
In this embodiment, the date when the image 610 is photographed is "May 1, 2009".
Third, if a user photographs the image 610 using the mobile phone 100, the mobile phone 100 stores information regarding the place where the image 610 is photographed using a built-in GPS signal receiver. Based on this information, mobile phone 100 extracts the place where image 610 is generated as metadata. In this embodiment, the place where the image 610 is photographed is "Seoul".
modality, metadata such as "Office", "suji", "Seoul", "05-05-2009", "Verrnelha", and "Desk" are extracted using the face recognition algorithm, the configuration analysis algorithm, and the color analysis algorithm, but this is just an example. Metadata can be extracted in other ways.
As described above, metadata can be extracted based on what is entered by a user.
Through the above process, metadata with respect to a photographed image is stored together with the image in the central storage device 200, so that a user can search for the images more easily and conveniently using the stored data.
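The inference by comparison with pre-stored tagged images described above (the "Desk"/"Office"/"suji" example) might look roughly like the following sketch; the feature strings standing in for the face recognition and configuration analysis results are purely illustrative assumptions.

```python
def infer_tags(new_analysis: dict[str, set[str]],
               stored_images: list[dict]) -> set[str]:
    """Reuse tags from pre-stored tagged images whose analysis results match
    the photographed image. new_analysis maps an analysis kind ("face",
    "configuration") to the features found in the new image."""
    inferred: set[str] = set()
    for stored in stored_images:
        for kind, features in new_analysis.items():
            if features & stored["analysis"].get(kind, set()):
                # Overlapping analysis results: propagate the stored image's tags.
                inferred |= stored["tags"]
    return inferred

pre_stored = [
    {"analysis": {"configuration": {"desk-outline"}}, "tags": {"Desk", "Office"}},
    {"analysis": {"face": {"face:suji"}}, "tags": {"suji"}},
]
new_photo = {"configuration": {"desk-outline"}, "face": {"face:suji"}}
print(infer_tags(new_photo, pre_stored))   # -> {'Desk', 'Office', 'suji'}
```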
Next, an image search and display process will be explained with reference to Figures 5 to 11.
Figure 5 illustrates a method for searching for an image.
If a user enters a command to search for an image using the MP3P 310, the MP3P 310 display shows a screen for selecting a search method. The screen displays (1) date/time search, (2) text search, (3) map search, and (4) motion search. The user can search for images stored in the central storage device 200 by selecting a method from among the search methods.
Figures 6 to 10 illustrate each of the search methods on a screen.
If a user selects (1) date/time search, a calendar screen is displayed on the MP3P 310 as shown in Figure 6.
Consequently, the user can select a desired date from the calendar displayed on the MP3P 310, and the MP3P 310 requests the central storage device 200 to transmit the images that have the selected date as metadata.
Subsequently, the MP3P 310 receives the search result for the images that have the selected date as metadata from the central storage device 200. The search result can be the images themselves or separate information regarding the images.
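On the side of the central storage device 200, the date search amounts to filtering stored images by the metadata tagged to them, roughly as in this sketch (the index structure and file names are assumptions for illustration).

```python
from datetime import date

# Toy index kept by the central storage device 200: each stored image carries
# the metadata that was tagged to it.
stored_images = [
    {"file": "img_0610.jpg", "metadata": {"suji", "Office", "Red", "2009-05-01"}},
    {"file": "img_0731.jpg", "metadata": {"Seoul", "2009-07-31"}},
]

def search_by_date(selected: date) -> list[str]:
    """Return the images whose tagged metadata contains the selected date
    (the request/search/transmit flow of steps S480-S500)."""
    key = selected.isoformat()
    return [img["file"] for img in stored_images if key in img["metadata"]]

print(search_by_date(date(2009, 5, 1)))   # -> ['img_0610.jpg']
```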
The above embodiments are only examples, and the present invention is not limited to a search method by selecting a date.
Meanwhile, if a user selects (2) text search, the MP3P 310 display displays several texts simultaneously as shown in Figure 7. The texts refer to the metadata, and the texts can be displayed on the screen at random, or the size or place of the texts can be designated on the screen considering aspects such as importance and frequency.
Specifically, MP3P 310 can display metadata with high frequency in a way that distinguishes it from other metadata. For example, MP3P 310 can display the metadata with the highest frequency in a font, size and color that are different from the other metadata. In addition, MP3P 310 can place the metadata with the highest frequency at the highest place on the screen.
Similarly, the MP3P 310 can display the metadata with the highest priority set by a user in a different way from the other metadata. For example, MP3P
310 can display the metadata with the highest priority in a font, size and color that are different from the other metadata. In addition, MP3P 310 can place the metadata with the highest priority at the highest place on the screen.
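A rough sketch of this weighted text display: font size grows with how often a metadata text has been used, and the list is ordered so the most frequent entries can be placed at the top of the screen. The frequency input and point sizes are assumptions; a user-defined priority could replace the sort key in the same way.

```python
def layout_tag_cloud(frequencies: dict[str, int],
                     min_pt: int = 12, max_pt: int = 32) -> list[tuple[str, int]]:
    """Assign a font size to each metadata text in proportion to its usage
    frequency, ordered from most to least frequent for top-down placement."""
    if not frequencies:
        return []
    lo, hi = min(frequencies.values()), max(frequencies.values())
    span = (hi - lo) or 1
    return [(text, min_pt + (count - lo) * (max_pt - min_pt) // span)
            for text, count in sorted(frequencies.items(),
                                      key=lambda item: item[1], reverse=True)]

print(layout_tag_cloud({"suji": 9, "Office": 4, "Desk": 1}))
# -> [('suji', 32), ('Office', 19), ('Desk', 12)]
```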
Consequently, the user can select a desired text from the metadata texts displayed on the MP3P 310, and thus the MP3P 310 can request that the central storage device 200 transmit the images that have the selected text as metadata according to the user's text selection manipulation.
Subsequently, the MP3P 310 receives the search result for images that have selected text as metadata from the central storage device
200. The search result can be the images themselves or separate information regarding the images.
The above embodiments are only examples, and the present invention is not limited to a search method by selecting a text.
Meanwhile, if a user selects (3) map search, the MP3P 310 display shows a map 830 as shown in Figure 8.
Consequently, the user can select a desired place (region) on the map displayed on the MP3P 310, and so the MP3P 310 can ask the central storage device 200 to transmit the images having the selected place (region) as metadata according to the user's place selection manipulation.
Subsequently, the MP3P 310 receives the search result for images that have the selected location (region) as metadata from the central storage device 200. The search result can be the images themselves or separate information regarding the images.
The above embodiments are only examples, and the present invention is not limited to a search method by selecting a place (region).
Meanwhile, if a user selects (4) motion search, the MP3P 310 initiates a photography mode as illustrated in Figure 9. Consequently, the user can photograph an object 840 via the MP3P 310, and the MP3P 310 extracts a specific portion 850 from the photographed object as illustrated in Figure 10. Thus, the MP3P 310 can request that the central storage device 200 transmit the images that have the selected portion as metadata and images that contain the same image as the image of the specific portion.
Subsequently, the MP3P 310 receives the search result on the images that have the selected portion from the central storage device 200. The search result can be the images themselves or separate information regarding the images.
The above embodiments are only examples, and the present invention is not limited to a search method by selecting motion.
Figure 11 illustrates the images sought by the central storage device 200 that are transmitted 5 to the MP3P 310 and displayed on an MP3P 310 screen.
In particular, Figure 11 illustrates that images 860 that have the date "May 1, 2009" are fetched by the central storage device 200 based on the date entered into the MP3P 310 and are displayed on the MP3P 310.
Consequently, a user can search for the images stored in the central storage device 200 more easily and conveniently by manipulating the MP3P 310 which is a different device from the central storage device 200.
Configuration of each device for linked metadata identification Figure 12 is a schematic block diagram of the mobile phone.
Mobile phone 100 comprises a photography unit 110; an image processing unit 120; a display 130; a handling unit 140; a controller 150; a communication unit 160; a GPS signal receiving unit 170; and a storage unit 180.
The photography unit 110 photographs an object and transmits the photographed signal to the image processing unit 120.
The image processing unit 120 generates an image using the photographed image signal and transmits the generated image to the display 130. Consequently, the display 130 displays the received image on a screen. In addition, the image processing unit 120 transmits the result of the analysis of the generated image, obtained using the face recognition algorithm, the configuration analysis algorithm, and the color analysis algorithm, to the controller 150.
In addition, image processing unit 120 adds metadata received from controller 150 to the generated image.
The manipulation unit 140 receives a manipulation command from the user. Specifically, the manipulation unit 140 receives user manipulation related to the photographing of an object, image analysis, extraction and linked metadata identification, and transmission of linked identified metadata.
The handling unit 140 transmits a handling command entered from a user to the controller 150.
Controller 150 controls the overall operation of the mobile phone 100. In particular, controller 150 controls the image processing unit 120 to analyze images, extracts metadata from the images based on the analysis result produced by the image processing unit 120, and identifies the extracted metadata linked to the images.
The communication unit 160 transmits the identified images to the outside using a wireless communication method under the control of the controller 150.
The GPS signal receiving unit 170 receives information regarding the location of mobile phone 100, for example, from a satellite, and transmits the information to controller 150. If an image is generated, controller 150 extracts the information with respect to the location as metadata of the generated image.
The storage unit 180 stores the metadata, information regarding the generated image, information regarding the location, information regarding the image analysis, and programs necessary to perform the overall function of the mobile phone 100. The storage unit 180 can be realized as a hard disk or a non-volatile memory.
As the configuration of peripheral devices 310, 320, 330, 340 such as MP3P 310 can be inferred from the configuration of the previously mentioned mobile phone 100, additional explanation will not be provided.
Figure 13 is a schematic block diagram of the central storage apparatus 200.
The central storage apparatus 200 comprises a communication unit 210, a controller 220, and a storage unit 230.
The communication unit 210 communicates with the mobile phone 100 and peripheral devices 310, 320, 330, 340 using a wireless communication method to transmit/receive images and to receive a request to search for images.
In particular, the communication unit 210 receives identified images linked from the mobile phone 100 and a request to search for images from peripheral devices 310, 320, 330, 340, and transmits the extracted images to peripheral devices 310, 320, 330 , 340.
Controller 220 controls the overall operation of central storage device 200. In particular, controller 220 controls in order to store images received from mobile phone 100 in storage unit 230 and fetches images stored in storage unit 230 according to a method selected by the peripheral devices 310, 320, 330, 340.
The storage unit 230 stores the linked, identified images, metadata, and programs necessary to perform the overall operation of the central storage device 200. The storage unit 230 can be realized as a hard disk, or a non-volatile memory.
In the above embodiment, the mobile phone 100 is presented as a device to generate an image, however this is only an example. The technical feature of the present invention can also be applied when not only the previously mentioned peripheral devices 310, 320, 330, 340, but also other devices are used as the device to generate an image.
In addition, a method of generating an image is not limited to generating an image through the action of photographing an object. The technical feature of the present invention can also be applied when an image is generated using other methods, such as transferring an image over a network.
In addition, the technical feature of the present invention is not limited to an image. If an audio or text is generated, for example, metadata can also be identified in a way linked to the generated audio or text.
The previously mentioned metadata generation method and the search method are just examples. The technical feature of the present invention can also be applied when other methods are used.
Metadata with linked gesture The mobile phone 100 can also identify a user's gesture linked to metadata. Referring to Figure 12, the handling unit 140 of the mobile phone 100 receives a user gesture that corresponds to a user touch. Here, the gesture represents a certain touch shape introduced by the user. The gesture consists of information regarding the shape of a single touch input from a user. Consequently, controller 150 of mobile phone 100 determines whether a gesture is the same as another gesture based on the shape of the touch, regardless of the location of the touch manipulation.
Controller 150 identifies the introduced gesture in a way linked to the metadata of specific content by matching the introduced gesture to one piece of information included in the metadata of the specific content.
If the gesture is introduced through the handling unit 140 while the gesture is identified in a way linked to the specific content metadata, the controller 150 executes the specific content.
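A minimal sketch of how a controller might compare touch shapes independently of screen position and keep a gesture tagged to album information is given below; the point representation and the crude tolerance-based comparison are assumptions for illustration, not the recognizer the patent actually uses.

```python
Point = tuple[float, float]

def normalize(stroke: list[Point]) -> list[Point]:
    """Shift the stroke so its first point is at the origin: only the shape of
    the touch matters, not where on the screen it was drawn."""
    x0, y0 = stroke[0]
    return [(x - x0, y - y0) for x, y in stroke]

def same_gesture(a: list[Point], b: list[Point], tol: float = 0.1) -> bool:
    """Crude point-by-point shape comparison; a real recognizer would resample
    the strokes and use a proper shape distance."""
    a, b = normalize(a), normalize(b)
    return len(a) == len(b) and all(
        abs(ax - bx) <= tol and abs(ay - by) <= tol
        for (ax, ay), (bx, by) in zip(a, b))

# Tagging (as in the album example explained below): the stored shape of
# gesture 1 is linked to the album information "Album 1" of the content metadata.
gesture_tags: dict[str, list[Point]] = {
    "Album 1": [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)],
}

# The same shape drawn at a different position on the screen still matches.
print(same_gesture([(5.0, 5.0), (6.0, 5.0), (6.0, 6.0)],
                   gesture_tags["Album 1"]))   # -> True
```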
The above operation will be explained in detail with reference to Figures 14 to 27, in which the premise is that the handling unit 140 of the mobile phone 100 can receive a touch manipulation from the user including an input via a touch screen.
Figures 14 to 19 are views illustrating a process of identifying a user's gesture in a way linked to the metadata of music content using the mobile phone 100 according to an embodiment of the present invention. Here, the gesture represents a certain shape of touch introduced by the user. The gesture consists of information regarding the shape of a single touch introduced by a user. Consequently, mobile phone 100 determines whether one gesture is the same as another gesture based on the shape of the touch, regardless of the location of the touch manipulation.
Figure 14 illustrates mobile phone 100 in which an album list is displayed on the screen. As shown in Figure 14, "Album 1" is highlighted (1010) in the album list displayed on the mobile phone 100. In this case, if a user touches a menu 1020, the mobile phone 100 displays a menu 1030, which shows the functions related to "Album 1" on the screen as shown in Figure 15.
Figure 15 illustrates mobile phone 100 in which menu 1030 showing the functions related to "Album 1" is displayed on the screen. As shown in Figure 15, menu 1030 includes search, selective play, cancel, selective cancel, put in the album, and introduce gesture
1035. Here, if introduce gesture 1035 is selected by a user, the mobile phone 100 displays a gesture input window 1040 on the screen as shown in Figure 16.
Figure 16 illustrates the mobile phone 100 in which the gesture input window 1040 is displayed to receive a gesture related to "Album 1". The mobile phone 100 recognizes the touch manipulation input through the gesture input window 1040 as a gesture.
Figure 17 illustrates gesture 1 (1050), which is a gesture related to "Album 1" and which is introduced through the gesture input window 1040 by a user. As illustrated in Figure 17, if gesture 1 (1050) is entered through the gesture input window 1040 by a user, the mobile phone 100 recognizes and stores the shape created by the introduced gesture 1 (1050).
Figure 18 illustrates mobile phone 100 in which a message 1070 informing that the storage of the gesture is complete is displayed on the screen. If gesture 1 (1050) is entered and a confirmation icon 1060 is selected by a user, mobile phone 100 stores the shape of the entered gesture 1 (1050) and identifies gesture 1 (1050) in a way linked to the album information of the content metadata by matching gesture 1 (1050) to "Album 1". In this case, the mobile phone 100 identifies gesture 1 (1050) in a way linked to the album information of the metadata of all the contents whose album information is "Album 1" among the stored contents.
Figure 19 illustrates the structure of a content file in which a gesture is identified in a way linked to the metadata album information. As shown in Figure 19, the content file includes metadata and content data. Metadata includes information related to the album, artist, genre, and file name. In addition, it can be seen that "Album 1" is linked to album information, and gesture 1 is also linked to "Album 1". As such, mobile phone 100 identifies the gesture in a way linked to the metadata of the content by matching the gesture to one of the information included in the metadata.
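The content file of Figure 19 could be mirrored by a structure along these lines; the field values and the exact layout are illustrative only, not the on-disk format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentFile:
    """Rough mirror of the content file of Figure 19: metadata fields plus the
    content data itself."""
    album: str
    artist: str
    genre: str
    file_name: str
    gesture: Optional[list[tuple[float, float]]] = None  # shape tagged to the album information
    content_data: bytes = b""

track = ContentFile(album="Album 1", artist="Artist A", genre="Pop",
                    file_name="track01.mp3",
                    gesture=[(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)])
```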
Through the above process, mobile phone 100 can identify a gesture in a way linked to metadata.
Then, a method of executing content using the gesture will be explained with reference to Figures 20 to 22.
Figure 20 illustrates the mobile phone 100 on which a background screen 1110 is displayed. In that case, if a gesture 1120 is entered by a user, the mobile phone 100 recognizes the format of the entered gesture 1120.
It can be seen that gesture 1120 introduced in Figure 21 is the same as gesture 1 (1050) introduced in Figure 17. Consequently, the mobile phone 100 performs the content corresponding to "Album 1".
Figure 22 illustrates a screen on which the mobile phone 100 reproduces the contents of "Album 1" first. As shown in Figure 22, if gesture 1120 is introduced, mobile phone 100 selects content from "Album 1" as a playlist and automatically plays a first selection of music from "Album 1".
As such, if a gesture is introduced, the mobile phone 100 searches the gestures identified in a way linked to the metadata of the contents, selects the content to which the same gesture is linked as metadata, and automatically plays the selected content.
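Assuming the input stroke has already been matched to a stored gesture (see the shape-matching sketch above), the execution step reduces to collecting the contents whose metadata carries that gesture and playing the first one; the content records and the print-based playback below are invented for illustration.

```python
contents = [
    {"title": "Track 1", "album": "Album 1", "gesture": "gesture 1"},
    {"title": "Track 2", "album": "Album 1", "gesture": "gesture 1"},
    {"title": "Track 9", "album": "Album 2", "gesture": None},
]

def execute_by_gesture(recognized: str) -> None:
    # Select every content whose metadata is tagged with the recognized gesture.
    playlist = [c for c in contents if c["gesture"] == recognized]
    if playlist:
        print("Playlist:", [c["title"] for c in playlist])
        print("Now playing:", playlist[0]["title"])  # first selection plays automatically

execute_by_gesture("gesture 1")
```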
Consequently, a user can easily execute the desired content by simply introducing the gesture.
Then, a process of changing the gestures will be explained with reference to Figures 23 to 27, which illustrate a process of changing a gesture identified in a way linked to the metadata of music content using the mobile phone 100.
Figure 23 illustrates the mobile phone 100 in which an album list is displayed on the screen. As shown in Figure 23, "Album 1" is highlighted (1210) in the album list displayed on the mobile phone 100. In this case, if the user touches a menu 1220, the mobile phone 100 displays a menu 1230, which shows the functions related to "Album 1" on the screen as shown in Figure 24.
Figure 24 illustrates the mobile phone 100 in which menu 1230 showing the functions related to "Album 1" is displayed on the screen. As shown in Figure 24, menu 1230 includes search, selective play, cancel, selective cancel, put in the album, and edit gesture
1235. Here, if edit gesture 1235 is selected by a user, the mobile phone 100 displays a gesture input window 1240 on the screen, as shown in Figure 25.
Figure 25 illustrates the mobile phone 100 in which the gesture input window 1240 is displayed to receive a gesture in relation to "Album 1". The mobile phone 100 recognizes the touch manipulation introduced through the gesture input window 1240 as a gesture.
Figure 26 illustrates that gesture 2 (1245), which is a gesture related to "Album 1", is introduced through the gesture input window 1240 by a user. As illustrated in Figure 26, if gesture 2 (1245) is entered through the gesture input window 1240 by a user, mobile phone 100 changes the gesture identified in a way linked to "Album 1" from gesture 1 (1050) to gesture 2 (1245).
Figure 27 illustrates that the gesture identified in a way linked to the album information of the metadata is changed to gesture 2 (1245). As shown in Figure 27, it can be seen that gesture 2 is identified in a linked way together with "Album 1". As such, mobile phone 100 can change the identified gesture in a manner linked to metadata.
As described above, mobile phone 100 identifies a gesture in a way linked to the metadata, and thus content can be searched and reproduced, using the gesture identified in a way linked to the metadata.
In the above embodiments, the content is defined to be music content, but this is just an example. The content can also be image content, video content, or document content.
In addition, in the above embodiments, the gesture is identified in a way linked to one piece of information of the metadata, but this is just an example. The gesture can itself be one of the pieces of information constituting the metadata. For example, metadata may include information related to the album, artist, genre, file name and gesture.
Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes can be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
Claims:
Claims (15)
[1]
1. LINKED METADATA IDENTIFICATION SYSTEM, characterized by comprising: a first device that analyzes an image, extracts at least one metadata with respect to the image, identifies the metadata in a way linked to the image and transmits the identified image in a way linked to the metadata to a third device; and a second device that allows at least one image among the images stored on the third device to be searched based on the metadata.
[2]
2. Metadata linked identification system, according to claim 1, characterized in that the first device analyzes an image using at least one of: a face recognition algorithm, a configuration analysis algorithm and a color analysis algorithm , and extracts the metadata with respect to at least one of: figure, place and object included in the image.
[3]
3. Metadata linked identification system,
according to claim 1, characterized in that the first device stores information with respect to the date and time when the image is generated and extracts the information with respect to the date and time as metadata.
[4]
4. Metadata linked identification system, according to claim 1, characterized by the first device extracting the metadata in relation to a place where the image is generated upon receipt of a GPS signal.
[5]
5. Metadata linked identification system, according to claim 1, characterized in that the first device extracts the metadata based on the information introduced through user manipulation after the image is generated.
[6]
6. Metadata linked identification system, according to any one of claims 1 to 5, characterized in that the second device selects at least one text from the texts displayed on the second device and transmits the selected text to the third device so that at least one image from the images stored on the third device is searched based on the selected text.
[7]
7. Metadata linked identification system according to any of claims 1 to 6, characterized in that the second device selects a specific date from a calendar displayed on the second device and transmits the selected date to the third device so that at least an image among the images stored on the third device is searched based on the selected date.
[8]
8. Metadata linked identification system according to any one of claims 1 to 7, characterized in that the second device selects a specific location on a map displayed on the second device and transmits the selected location to the third device so that at least an image among the images stored on the third device is searched based on the selected location.
[9]
9. Metadata linked identification system according to any one of claims 1 to 8, characterized in that the second device extracts a user's movement from an image photographed by the second device and transmits the extracted movement to the third device so that at least one image among the images stored in the third device is searched based on the movement.
[10]
10. Metadata linked identification system, according to any of claims 6 to 9, characterized in that the third device fetches an image based on the identified metadata linked to the images stored on the third device and transmits the sought image to the second device.
[11]
11. Metadata linked identification system according to any one of claims 1 to 10, characterized in that the first device selects part of the metadata from the extracted metadata based on the user's metadata usage pattern or based on priority and identifies selected metadata linked to an image.
[12]
12. IMAGE SEARCH METHOD, characterized by comprising: extracting at least one metadata with respect to an image through image analysis; identifying the metadata linked to the image and storing the image with the identified metadata linked on an external device; and searching for at least one image from the images stored on the external device based on the metadata.
[13]
13. DEVICE, characterized by comprising: a handling unit that receives a gesture input from a user; and a controller that identifies the gesture introduced in a way linked to specific content metadata.
[14]
Device according to claim 13, characterized in that the handling unit receives a touch input from a user; and in which the gesture is a specific shape of touch introduced by a user.
[15]
15. Device, according to any one of claims 13 and 14, characterized by the controller, if the gesture is introduced through the manipulation unit while the gesture is identified in a way linked to the metadata of the specific content, executing the specific content.
Similar technologies:
Publication No. | Publication date | Patent title
BR112012002919A2|2020-08-11|linked metadata identification system, image search method, and device
US9058375B2|2015-06-16|Systems and methods for adding descriptive metadata to digital content
US7694214B2|2010-04-06|Multimodal note taking, annotation, and gaming
WO2017092280A1|2017-06-08|Multimedia photo generation method, apparatus and device, and mobile phone
JP2007259415A|2007-10-04|Image processing apparatus and image processing method, server and control method of the same, and program and storage medium
US8004580B2|2011-08-23|Apparatus and method for managing images of mobile terminal
KR100788605B1|2007-12-26|Apparatus and method for serving contents
CN109189879A|2019-01-11|E-book display methods and device
JP2007108989A|2007-04-26|Information processing system and information processing method
KR20160069144A|2016-06-16|Terminal device, information display system and controlling method thereof
US10893137B2|2021-01-12|Photography guiding method, device, and system
CN106559814A|2017-04-05|The method and apparatus of connection WAP
JP2006254014A|2006-09-21|Dictionary retrieving system
JP2009055233A|2009-03-12|Image recorder, control program, computer readable recording medium, and image recording method
CN104978389B|2019-07-02|Method, system, server and client side
JP6930553B2|2021-09-01|Information processing system, information processing device, information processing method and program
CN104584503B|2018-08-10|It is operated using the striding equipment of gesture
JP2014096156A|2014-05-22|Information processor, control method, control program and recording medium
JP6115673B2|2017-04-19|Apparatus and program
JP5655916B2|2015-01-21|Image search system
JP2017157220A|2017-09-07|Imaging device
JP2015222493A|2015-12-10|Image search device and control method of the same as well as program
JP5532748B2|2014-06-25|Imaging device
KR101000350B1|2010-12-13|A system and method for automatically tagging photos with the people in them using short-range wireless devices
JP2011048509A|2011-03-10|Search support system
Patent family:
Publication No. | Publication date
WO2011021907A2|2011-02-24|
AU2010284736B2|2016-01-28|
EP2467825A4|2016-09-21|
US10157191B2|2018-12-18|
RU2012110605A|2013-09-27|
KR20110020158A|2011-03-02|
CN104063175B|2019-05-03|
CN102473304B|2015-11-25|
JP5791605B2|2015-10-07|
KR101660271B1|2016-10-11|
JP2013502637A|2013-01-24|
EP2467825A2|2012-06-27|
CN104063175A|2014-09-24|
US20110047517A1|2011-02-24|
AU2010284736A1|2012-01-19|
WO2011021907A3|2011-06-23|
CN102473304A|2012-05-23|
Cited documents:
Publication No. | Filing date | Publication date | Applicant | Patent title

US5592608A|1993-10-15|1997-01-07|Xerox Corporation|Interactively producing indices into image and gesture-based data using unrecognized graphical objects|
US5832474A|1996-02-26|1998-11-03|Matsushita Electric Industrial Co., Ltd.|Document search and retrieval system with partial match searching of user-drawn annotations|
JP4302799B2|1998-09-16|2009-07-29|シャープ株式会社|Document search apparatus, method, and recording medium|
JP2000137555A|1998-11-02|2000-05-16|Sony Corp|Information processor, processing method and recording medium|
JP2003533770A|2000-05-05|2003-11-11|株式会社メガチップス|System and method for information acquisition and storage for deferred viewing|
JP4192729B2|2002-09-16|2008-12-10|富士ゼロックス株式会社|Method for highlighting free-form annotation, annotation highlighting device, and program for highlighting free-form annotation|
JP4031255B2|2002-02-13|2008-01-09|株式会社リコー|Gesture command input device|
US7200803B2|2002-06-27|2007-04-03|Microsoft Corporation|System and method for visually categorizing electronic notes|
US7840892B2|2003-08-29|2010-11-23|Nokia Corporation|Organization and maintenance of images using metadata|
JP2005354134A|2004-06-08|2005-12-22|Sony Corp|Image management method and device, recording medium, and program|
JP4684745B2|2005-05-27|2011-05-18|三菱電機株式会社|User interface device and user interface method|
US7702681B2|2005-06-29|2010-04-20|Microsoft Corporation|Query-by-image search and retrieval system|
CN101410825B|2006-02-27|2013-03-27|阜博有限公司|Systems and methods for publishing, searching, retrieving and binding metadata for a digital object|
JP4175390B2|2006-06-09|2008-11-05|ソニー株式会社|Information processing apparatus, information processing method, and computer program|
US8713079B2|2006-06-16|2014-04-29|Nokia Corporation|Method, apparatus and computer program product for providing metadata entry|
JP2008165424A|2006-12-27|2008-07-17|Sony Corp|Image retrieval device and method, imaging device and program|
US8473525B2|2006-12-29|2013-06-25|Apple Inc.|Metadata generation for image files|
JP5121285B2|2007-04-04|2013-01-16|キヤノン株式会社|Subject metadata management system|
US8244284B2|2007-06-28|2012-08-14|Giga-Byte Communications, Inc.|Mobile communication device and the operating method thereof|
JP2009017017A|2007-07-02|2009-01-22|Funai Electric Co Ltd|Multimedia playback device|
KR101513616B1|2007-07-31|2015-04-20|엘지전자 주식회사|Mobile terminal and image information managing method therefor|
KR101476174B1|2007-09-04|2014-12-24|엘지전자 주식회사|Portable terminal and method for executing a function in the portable terminal|
JP5096863B2|2007-10-09|2012-12-12|オリンパスイメージング株式会社|Search device|
US20090119572A1|2007-11-02|2009-05-07|Marja-Riitta Koivunen|Systems and methods for finding information resources|
KR101087134B1|2007-12-10|2011-11-25|한국전자통신연구원|Digital Data Tagging Apparatus, Tagging and Search Service Providing System and Method by Sensory and Environmental Information|
US8212784B2|2007-12-13|2012-07-03|Microsoft Corporation|Selection and display of media associated with a geographic area based on gesture input|
TW200937254A|2008-02-29|2009-09-01|Inventec Appliances Corp|A method for inputting control commands and a handheld device thereof|
US9501694B2|2008-11-24|2016-11-22|Qualcomm Incorporated|Pictorial methods for application selection and activation|
EP2380093B1|2009-01-21|2016-07-20|Telefonaktiebolaget LM Ericsson |Generation of annotation tags based on multimodal metadata and structured semantic descriptors|
KR20140039342A|2009-06-19|2014-04-02|알까뗄 루슨트|Gesture on touch sensitive input devices for closing a window or an application|
KR101611302B1|2009-08-10|2016-04-11|엘지전자 주식회사|Mobile terminal capable of receiving gesture input and control method thereof|US7822746B2|2005-11-18|2010-10-26|Qurio Holdings, Inc.|System and method for tagging images based on positional information|
JP5043748B2|2008-05-19|2012-10-10|キヤノン株式会社|CONTENT MANAGEMENT DEVICE, CONTENT MANAGEMENT DEVICE CONTROL METHOD, PROGRAM, AND RECORDING MEDIUM|
US8935259B2|2011-06-20|2015-01-13|Google Inc|Text suggestions for images|
US10089327B2|2011-08-18|2018-10-02|Qualcomm Incorporated|Smart camera for sharing pictures automatically|
WO2013052867A2|2011-10-07|2013-04-11|Rogers Henk B|Media tagging|
US9779114B2|2011-10-07|2017-10-03|Henk B. Rogers|Media geotagging|
KR101629588B1|2011-12-13|2016-06-13|인텔 코포레이션|Real-time mapping and navigation of multiple media types through a metadata-based infrastructure|
US9646313B2|2011-12-13|2017-05-09|Microsoft Technology Licensing, Llc|Gesture-based tagging to view related content|
WO2013114931A1|2012-01-30|2013-08-08|九州日本電気ソフトウェア株式会社|Image management system, mobile information terminal, image management device, image management method and computer-readable recording medium|
WO2014031015A1|2012-08-24|2014-02-27|Motorola Solutions, Inc.|Method and apparatus for authenticating digital information|
US9235342B2|2012-11-28|2016-01-12|International Business Machines Corporation|Selective sharing of displayed content in a view presented on a touchscreen of a processing system|
KR102090269B1|2012-12-14|2020-03-17|삼성전자주식회사|Method for searching information, device, and computer readable recording medium thereof|
US20140282012A1|2013-03-15|2014-09-18|Write Brothers, Inc.|Method and system for making storyforming choices based on associated objects|
US20140298274A1|2013-03-22|2014-10-02|Ntt Docomo, Inc.|Method and electronic device for processing data|
US9652460B1|2013-05-10|2017-05-16|FotoIN Mobile Corporation|Mobile media information capture and management methods and systems|
WO2014207188A1|2013-06-28|2014-12-31|Koninklijke Philips N.V.|Nearest available roadmap selection|
KR102165818B1|2013-09-10|2020-10-14|삼성전자주식회사|Method, apparatus and recovering medium for controlling user interface using a input image|
US20150081703A1|2013-09-16|2015-03-19|Google Inc.|Providing labels for photos|
KR20150032101A|2013-09-17|2015-03-25|삼성전자주식회사|Apparatus and Method for Display Images|
CN104636059B|2013-11-13|2018-04-13|展讯通信(上海)有限公司|The searching method and its system of account project|
US10049477B1|2014-06-27|2018-08-14|Google Llc|Computer-assisted text and visual styling for images|
US9881027B2|2014-12-31|2018-01-30|Ebay Inc.|Image appended search string|
US9563643B2|2015-06-25|2017-02-07|Intel Corporation|Automatic metatagging in images|
KR101634106B1|2015-09-25|2016-06-29|주식회사 지노시스템|A geographic information inquiry method of through location matching and space searching|
US10564924B1|2015-09-30|2020-02-18|Amazon Technologies, Inc.|Navigating metadata in long form content|
GB2558850B|2015-12-02|2021-10-06|Motorola Solutions Inc|Method for associating a group of applications with a specific shape|
US10613707B2|2015-12-10|2020-04-07|International Business Machines Corporation|Auditing icons via image recognition to provide individualized assets to software project teams|
DE102016202095A1|2016-02-11|2017-08-17|Bayerische Motoren Werke Aktiengesellschaft|Device for selecting elements from a database|
US10146758B1|2016-09-30|2018-12-04|Amazon Technologies, Inc.|Distributed moderation and dynamic display of content annotations|
KR20190109661A|2018-03-08|2019-09-26|한국전자통신연구원|Method for generating data for learning emotion in video, method for determining emotion in video, and apparatus using the methods|
KR20210104996A|2020-02-18|2021-08-26|주식회사 와이즈패션|Apparatus for supporting feature information tagging and operating method thereof|
KR20210105001A|2020-02-18|2021-08-26|주식회사 와이즈패션|Record medium on which program is recorded for supporting tagging of feature information in order data|
KR20210104999A|2020-02-18|2021-08-26|주식회사 와이즈패션|A operating method for supporting feature information tagging|
KR20210104997A|2020-02-18|2021-08-26|주식회사 와이즈패션|Apparatus for supporting feature information tagging|
KR20210104998A|2020-02-18|2021-08-26|주식회사 와이즈패션|Apparatus for supporting feature information tagging|
KR20210105000A|2020-02-18|2021-08-26|주식회사 와이즈패션|A program for executing a method on a computer that supports tagging feature information corresponding to standardized order data|
KR102216065B1|2020-05-04|2021-02-18|호서대학교 산학협력단|Method for providing search result for video segment|
Legal status:
2020-08-25| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2020-09-01| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2020-12-15| B11B| Dismissal acc. art. 36, par 1 of ipl - no reply within 90 days to fullfil the necessary requirements|
2021-11-03| B350| Update of information on the portal [chapter 15.35 patent gazette]|
Priority:
Application No. | Filing date | Patent title
KR10-2009-0077507|2009-08-21|
KR20090077507|2009-08-21|
KR10-2010-0011496|2010-02-08|
KR1020100011496A|KR101660271B1|2009-08-21|2010-02-08|Metadata tagging system, image searching method, device, and method for tagging gesture|
PCT/KR2010/005564|WO2011021907A2|2009-08-21|2010-08-20|Metadata tagging system, image searching method and device, and method for tagging a gesture thereof|